
    A Highly Consistent Framework for the Evolution of the Star-Forming "Main Sequence" from z~0-6

    Using a compilation of 25 studies from the literature, we investigate the evolution of the star-forming galaxy (SFG) Main Sequence (MS) in stellar mass and star formation rate (SFR) out to $z \sim 6$. After converting all observations to a common set of calibrations, we find a remarkable consensus among MS observations ($\sim 0.1$ dex $1\sigma$ interpublication scatter). By fitting for the time evolution of the MS in bins of constant mass, we deconvolve the observed scatter about the MS within each observed redshift bin. After accounting for the observed scatter between different SFR indicators, we find the width of the MS distribution is $\sim 0.2$ dex and remains constant over cosmic time. Our best fits indicate the slope of the MS is likely time-dependent, with our best fit $\log \mathrm{SFR}(M_*, t) = (0.84 \pm 0.02 - 0.026 \pm 0.003 \times t)\,\log M_* - (6.51 \pm 0.24 - 0.11 \pm 0.03 \times t)$, with $t$ the age of the Universe in Gyr. We use our fits to create empirical evolutionary tracks in order to constrain MS galaxy star formation histories (SFHs), finding that (1) the most accurate representations of MS SFHs are given by delayed-$\tau$ models, (2) the decline in fractional stellar mass growth for a "typical" MS galaxy today is approximately linear for most of its lifetime, and (3) scatter about the MS can be generated by galaxies evolving along identical evolutionary tracks assuming an initial $1\sigma$ spread in formation times of $\sim 1.4$ Gyr.
    Comment: 59 pages, 10 tables, 12 figures, accepted to ApJS; v2, slight changes to text, added new figure and fit
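    The quoted best-fit relation is simple to evaluate directly. Below is a minimal sketch (Python) using only the central values of the coefficients; the function name and example inputs are illustrative, and the quoted uncertainties are omitted.

```python
def log_sfr_ms(log_mstar, t_gyr):
    """Central values of the quoted best-fit main sequence:
    log SFR(M*, t) = (0.84 - 0.026 t) log M* - (6.51 - 0.11 t),
    with t the age of the Universe in Gyr and SFR in Msun/yr."""
    slope = 0.84 - 0.026 * t_gyr
    normalization = 6.51 - 0.11 * t_gyr
    return slope * log_mstar - normalization

# Illustrative example: a log(M*/Msun) = 10 galaxy at t = 13.7 Gyr (z ~ 0)
# gives log SFR ~ -0.17, i.e. SFR ~ 0.7 Msun/yr.
print(log_sfr_ms(10.0, 13.7))
```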

    A Novel Application of Conditional Normalizing Flows: Stellar Age Inference with Gyrochronology

    Stellar ages are critical building blocks of evolutionary models, but challenging to measure for low-mass main sequence stars. An unexplored solution in this regime is the application of probabilistic machine learning methods to gyrochronology, a stellar dating technique that is uniquely well suited for these stars. While accurate analytical gyrochronological models have proven challenging to develop, here we apply conditional normalizing flows to photometric data from open star clusters, and demonstrate that a data-driven approach can constrain gyrochronological ages with a precision comparable to other standard techniques. We evaluate the flow results in the context of a Bayesian framework, and show that our inferred ages recover literature values well. This work demonstrates the potential of a probabilistic data-driven solution to widen the applicability of gyrochronological stellar dating.
    Comment: Accepted at the ICML 2023 Workshop on Machine Learning for Astrophysics. 10 pages, 3 figures (+1 in appendices)
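    The abstract does not specify the flow architecture, so the following is only a toy sketch of the general technique: a one-layer conditional affine flow in PyTorch, trained by maximizing the exact conditional log-likelihood of age given photometric features. All names are hypothetical; a real conditional normalizing flow would stack nonlinear coupling or autoregressive layers.

```python
import math
import torch
import torch.nn as nn

class ToyConditionalFlow(nn.Module):
    """Minimal 1-D conditional flow: a single affine transform of the
    scalar age, whose shift and log-scale are predicted from the
    conditioning features x. One affine layer is equivalent to a
    heteroscedastic Gaussian; expressive flows stack nonlinear layers."""

    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),  # outputs (shift, log_scale)
        )

    def log_prob(self, age, x):
        shift, log_scale = self.net(x).chunk(2, dim=-1)
        z = (age - shift) * torch.exp(-log_scale)        # age -> base space
        log_base = -0.5 * z**2 - 0.5 * math.log(2 * math.pi)
        return (log_base - log_scale).sum(-1)            # change of variables

# Training sketch: minimize the negative conditional log-likelihood.
# flow = ToyConditionalFlow(n_features=3)
# opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
# opt.zero_grad()
# loss = -flow.log_prob(ages, photometry).mean()
# loss.backward(); opt.step()
```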

    Exploring Photometric Redshifts as an Optimization Problem: An Ensemble MCMC and Simulated Annealing-Driven Template-Fitting Approach

    Using a grid of $\sim 2$ million elements ($\Delta z = 0.005$) adapted from COSMOS photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound rapid but "greedy" optimization schemes, even with additional stochastic sampling methods. In order to robustly and efficiently explore these surfaces, we develop BAD-Z [Brisk Annealing-Driven Redshifts (Z)], which combines ensemble Markov Chain Monte Carlo (MCMC) sampling with simulated annealing to sample arbitrarily large, pre-generated grids in approximately constant time. Using a mock catalog of 384,662 objects, we show BAD-Z samples $\sim 40$ times more efficiently than a brute-force counterpart while maintaining similar levels of accuracy. Our results represent first steps toward designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.
    Comment: 14 pages, 8 figures; submitted to MNRAS; comments welcome
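    The BAD-Z implementation itself is not reproduced here; the sketch below (Python/NumPy, illustrative names) shows the underlying idea of annealing-driven sampling on a pre-generated grid: Metropolis walkers on a tabulated chi-squared surface with a decaying temperature, so early steps can escape local minima and late steps sample the target likelihood. It uses independent walkers; the paper's ensemble moves differ.

```python
import numpy as np

def annealed_grid_sampler(chi2_grid, n_steps=5000, n_walkers=16,
                          t_start=10.0, step=5, seed=None):
    """Metropolis sampling of L ~ exp(-chi^2 / 2T) on a 1-D grid, with a
    geometric temperature schedule from t_start down to T = 1 (the target)."""
    rng = np.random.default_rng(seed)
    chi2_grid = np.asarray(chi2_grid)
    n = len(chi2_grid)
    pos = rng.integers(0, n, size=n_walkers)                 # random starts
    temps = t_start ** (1.0 - np.arange(n_steps) / (n_steps - 1))
    chain = np.empty((n_steps, n_walkers), dtype=int)
    for i, temp in enumerate(temps):
        prop = np.clip(pos + rng.integers(-step, step + 1, n_walkers), 0, n - 1)
        dlogp = (chi2_grid[pos] - chi2_grid[prop]) / (2.0 * temp)
        pos = np.where(np.log(rng.random(n_walkers)) < dlogp, prop, pos)
        chain[i] = pos
    return chain  # treat only the late, T ~ 1 steps as posterior draws
```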

    Monte Carlo Techniques for Addressing Large Errors and Missing Data in Simulation-based Inference

    Upcoming astronomical surveys will observe billions of galaxies across cosmic time, providing a unique opportunity to map the many pathways of galaxy assembly at incredibly high resolution. However, the huge amount of data also poses an immediate computational challenge: current tools for inferring parameters from the light of galaxies take $\gtrsim 10$ hours per fit. This is prohibitively expensive. Simulation-based Inference (SBI) is a promising solution. However, it requires simulated data with characteristics identical to the observed data, whereas real astronomical surveys are often highly heterogeneous, with missing observations and variable uncertainties determined by sky and telescope conditions. Here we present a Monte Carlo technique for treating out-of-distribution measurement errors and missing data using standard SBI tools. We show that out-of-distribution measurement errors can be approximated by using standard SBI evaluations, and that missing data can be marginalized over using SBI evaluations over nearby data realizations in the training set. While these techniques slow the inference process from $\sim 1$ s to $\sim 1.5$ min per object, this is still significantly faster than standard approaches while also dramatically expanding the applicability of SBI. This expanded regime has broad implications for future applications to astronomical surveys.
    Comment: 8 pages, 2 figures, accepted to the Machine Learning and the Physical Sciences workshop at NeurIPS 202
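    As a concrete illustration of the missing-data case, the sketch below (Python/NumPy) marginalizes over unobserved bands by imputing them from nearby training-set realizations and pooling posterior samples from an already-trained amortized estimator. The `posterior.sample(shape, x=...)` interface is an assumption patterned after common SBI packages, and all names are hypothetical.

```python
import numpy as np

def marginalize_missing(posterior, x_obs, missing, train_x,
                        n_neighbors=50, n_samples=200):
    """Monte Carlo marginalization over missing entries of x_obs:
    impute them from training objects that are close in the observed
    bands, run the amortized SBI posterior on each filled-in vector,
    and pool the draws. `missing` is a boolean mask over the bands."""
    observed = ~missing
    # nearest training realizations in the observed bands
    dist = np.linalg.norm(train_x[:, observed] - x_obs[observed], axis=1)
    neighbors = np.argsort(dist)[:n_neighbors]
    pooled = []
    for idx in neighbors:
        x_fill = np.array(x_obs, copy=True)
        x_fill[missing] = train_x[idx, missing]   # plausible imputation
        pooled.append(np.asarray(posterior.sample((n_samples,), x=x_fill)))
    return np.concatenate(pooled, axis=0)         # approx. p(theta | x_obs)
```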